
The Excerpt podcast: AI has been unleashed. Should we be concerned?


On Sunday's episode of The Excerpt podcast: The unleashing of powerful artificial intelligence into the world, with little to no regulation or guardrails, has put many people on edge. It holds tremendous promise in all sorts of fields, from healthcare to law enforcement, but it also poses many risks. How worried should we be? To help us dig into it, we're joined by Vincent Conitzer, Head of Technical AI Engagement at the Institute for Ethics in AI at the University of Oxford.

Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.


Dana Taylor:

Hello and welcome to The Excerpt. I'm Dana Taylor. Today is Sunday, January 28th, 2024.

The unleashing of powerful artificial intelligence into the world, with little to no regulation or guardrails, has put many people on edge. Its nascent use and integration in everything from healthcare to law enforcement to war has already shown us the tremendous dangers it poses. How worried should we be? I'm joined now by someone who can help us understand the risks AI can pose and, hopefully, how we can mitigate those risks. Vincent Conitzer is a professor of computer science and philosophy at Carnegie Mellon University and Head of Technical AI Engagement at the Institute for Ethics in AI at the University of Oxford. He's also the co-author of a book coming out February 8th, Moral AI: And How We Get There. Vincent, thanks for joining me.

Vincent Conitzer:

Thank you for having me.

Dana Taylor:

So I want to start with Oxford's unique approach to ethics in AI and why that's important. What does philosophy have to teach us about how to approach AI regulation?

Vincent Conitzer:

So as you said, we see that AI is touching so many different areas of life right now. I am trained primarily as a computer scientist, but that training isn't really designed for thinking about how we want to deploy AI into the world and what constraints we want to put on that. Traditionally in computer science, we've been focused primarily on what we can actually do: how do we make these systems work better? And that, of course, is important. But now, as we see AI being deployed in the world, it raises a lot of questions about how we actually want to see this technology deployed. What do we want to allow? What do we want to disallow? What kind of features do we want the technology to have? And some of those will come back and become technical questions, in terms of how we can change the technology so that it works in a more desirable way.

But first of all, there's the question of what we actually want to see from the technology in the first place, and also what we do not want to see. And I think it's really important that that conversation includes not only computer scientists, but really people from a wide variety of disciplines. It requires expertise in law, in the social sciences, in medicine, and so on. And it's just too much for one field to do by itself. I think philosophy is a very natural starting point because it is also so broad. But the philosophers at the Institute for Ethics in AI are also very wide-ranging and have a lot of different focused expertise. And of course, we also bring people from other disciplines into the institute.

Dana Taylor:

I think we can all appreciate the promise of AI in fields like healthcare, law enforcement and even military settings, but there's also great concern in those same areas. Let's start with healthcare. Two recently filed lawsuits claim that an AI tool denied care that would've kept loved ones alive. How much danger does relying on AI in the healthcare field pose?

Vincent Conitzer:

So that's a great question. AI in healthcare is in many ways very promising, but there are also concerns. When we see these technologies being evaluated and deployed, you have to be very careful, because just because somebody claims in a study that they got very good results doesn't mean it would work just as well in practice. Often there are differences in how it's introduced into practice, where maybe images are taken in a somewhat different way, or the setup is somehow different. Maybe it also doesn't work as well for the population that's actually being treated. Maybe there is a racial bias in the data that was used to train the system in the first place, and then it doesn't work as well on a different population, and that's a concern to have. So in general, we need to be very careful in how these systems are evaluated.

Dana Taylor:

Okay. Let's pivot to AI and law enforcement. You've written extensively about AI's use in facial recognition. Many of us think, great, if it helps us catch a criminal, that's perfect. But what are some of the ethical and legal concerns here?

Vincent Conitzer:

So the use of AI in law enforcement is controversial, particularly the use of facial recognition, because it doesn't always work as well as advertised. As you said, ideally, if we can use it to catch a criminal and put somebody behind bars who deserves that, that's a great thing. But there are lots of concerns with how this technology actually works. We know from lots of studies that facial recognition does not work as well for everybody. In particular, people with darker skin generally tend not to be processed as well by this technology as people with lighter skin. There are a number of reasons for that. And this is particularly a concern in the use of this technology in law enforcement.

I saw a story recently that apparently some police departments are now actually using another tool, which is to start with somebody's DNA. There are companies that claim to be able to generate a picture of the person's face based on the DNA, and then they take that face and, again, run it through facial recognition. I think most of us in the field would react to that very nervously, because now you have two systems, both of which have failure modes, that you're stacking on top of each other. And so that's a concern.

Dana Taylor:

And Vincent, we can hardly talk about the dangers of AI without talking about its use in armed combat. According to The Guardian, the Israel Defense Forces have admitted to deploying a target creation tool called The Gospel to make "targeting choices with life and death consequences." What's your biggest concern when you think about AI's use in war?

Vincent Conitzer:

So there's a lot of concern about autonomous weapons. These are usually imagined as drones that are not operated by a human being, but are actually making decisions themselves using AI. Often we like to have a human in the loop in making these kinds of decisions. It makes us a little bit more comfortable to know that at least a human being at some point in the process is helping to make the decision. But there are a lot of questions about how to do this right. You may have a human in the loop who approves a decision, but there are a lot of details to be worked out about what a responsible way is to put the human in the decision process. Because if, in the end, the human feels pressured to approve every single decision, then maybe it's not particularly meaningful to have the human in the loop.

And then there are a lot of questions about how these decisions are being made from an ethical perspective. If we rely on the human to do all the ethical reasoning, but meanwhile don't put the human in a position where they can do that well, that's not a good setup. If instead we rely on the AI to do the moral reasoning itself, that raises a lot of other questions, like whether the AI system is really able to make such decisions well.

Dana Taylor:

So it's obviously a big year for politics. One story from six years ago was how Cambridge Analytica illegally harvested data from tens of millions of Facebook profiles to try to sway people in the 2016 election. We also now have generative AI that can mimic a candidate in audio and video to spread misinformation or disinformation. Can anything be done to ensure election integrity?

Vincent Conitzer:

Yes. So for us in the US, with upcoming elections, this is very much at the top of our minds, but of course this is happening in elections all around the world, where we see that some parties are trying to interfere with the election through misinformation, and sometimes that misinformation is AI-generated as well. For example, we actually saw just now in New Hampshire that some voters were receiving robocalls, apparently from President Joe Biden, telling them not to go vote. And in fact, it wasn't President Biden; it seems it was AI-generated. And we know that this is now possible, that we can mimic somebody's voice. We can even mimic their appearance and make fake videos, and this is becoming very challenging. And so we ourselves have to somehow be able to tell that something is AI-generated. And there are still some telltale signs that you can find, for example, by looking at somebody's eyes in the video.

But it seems likely that this technology will just become better and better, and at least to the naked eye, it will become very hard for people to tell that something is a deep fake. And so for us as citizens, this also requires that we start to think more about where our information is coming from. Why can we trust the information that we're receiving, or should we be skeptical of it? We can't necessarily trust what we see. We, of course, also can't necessarily trust what we read, but that's maybe more familiar to some extent.

Another concern is that some parties may start to be able to build very detailed models of us as individuals. This can be done with the help of AI, sometimes through data leaks, sometimes even through legitimately procured data. That can lead to very effective advertising, if we get to a world where we all have very personalized information sent to us. But in the context of elections, it's particularly concerning, because of course there are people in the world who really want to steer elections in a particular way, including people from outside that country.

Dana Taylor:

I want to turn to something that concerns all of us, and that's fraud. What are some of the ways AI is being used to trick people into giving up either money or access to sensitive information, or both? And more importantly, how can we guard against this?

Vincent Conitzer:

So one concern is phishing attacks, where somebody tries to get sensitive information from me, like a password or access to a system. And the concern with AI in particular is that it may make it possible to conduct fairly sophisticated phishing attacks on a much larger scale. We've all gotten these emails that are very generic, that are trying to get us to do something, but we can easily tell that they're not personalized. With AI, you could get much more targeted messages, produced much more cheaply and at a much greater scale. We also, again, see the use of deep fakes. We've already seen examples where somebody receives a call that is ostensibly from their child, and their child tells them, "Oh, these men have me. You need to transfer this amount of money to this account." That is, of course, extremely frightening, but often it's just a deep fake, while the child is hopefully sitting safely in the next room. But this is very concerning.

Dana Taylor:

Well, Vincent, we've so far been focused on talking about all the dangers that AI poses, but there's also the promise it holds in solving all kinds of problems that could make our lives demonstrably better. Can you share your thoughts on that?

Vincent Conitzer:

Absolutely, and of course, this is why so many of us went into AI research in the first place: because we felt it has the opportunity to make the world a better place. We already talked about applications of AI in healthcare. There are, of course, also applications of AI to the environment, because we're facing lots of challenges there as well. And in the sciences, these areas all interact, but in materials science, for example, and with protein folding, we've already seen this. So AI can really help us learn a lot about the world, as well as potentially improve it. But it's really important that it's deployed in a sensible way and that we keep an eye on the dangers that result from this as well. Somebody could use some of the latest AI systems to design better viruses or toxic chemicals, and so there's a lot of work to be done there as well.

Dana Taylor:

We've talked about a lot of risks with AI, but there's very little regulation here in the US to help us guard against some of these things. Can regulation effectively help us mitigate these risks?

Vincent Conitzer:

The tricky thing is that AI is now touching on so many different areas of life, and in one way, it seems to make sense to focus on the specific application: using AI to make employment decisions requires a different kind of regulation than the use of AI in healthcare, and so on. So that's one route: we can have these very tailored laws that focus specifically on the use of AI in one kind of domain. At the same time, it also seems to make sense to think about regulating AI at a higher level. This is, to some extent, motivated by what we've recently seen in AI: some of these very, very large models have become extremely effective at doing lots of things and are starting to get integrated into lots of downstream applications. I do have high hopes for regulation, but it is challenging, and it requires, I think, a lot of resources and attention to do it right, and quickly and adaptively enough for it to actually be effective.

Dana Taylor:

Vincent, thanks so much for joining me on The Excerpt.

Vincent Conitzer:

Thank you so much for having me.

Dana Taylor:

Thanks to our senior producer Shannon Rae Green for production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to [email protected]. Thanks for listening. I'm Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.

